Results 1 - 4 of 4
1.
Algorithms ; 16(5), 2023.
Article in English | Web of Science | ID: covidwho-20230787

ABSTRACT

Deception in computer-mediated communication represents a threat, and there is a growing need for efficient methods of detecting it. Machine learning models, through natural language processing, have proven highly successful at detecting lexical patterns related to deception. In this study, four selected machine learning models are trained and tested on data collected through a crowdsourcing platform on the topics of COVID-19 and climate change. The performance of the models was tested by analyzing n-grams (from unigrams to trigrams) and by using psycho-linguistic analysis. Important features were selected, and the models were further tested on different subsets of the obtained features. This study concludes that the subjectivity of the collected data greatly affects the detection of hidden linguistic cues of deception. Psycho-linguistic analysis, alone and in combination with n-grams, achieves better classification results than n-gram analysis alone, both when the models are tested on their own data and when their ability to generalize is examined; on trigrams in particular, the combined approach achieves accuracy up to 16% higher. N-gram analysis proved to be the more robust method when the mutual applicability of the models was tested, while psycho-linguistic analysis remained the least flexible.
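The two feature families this abstract compares can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the n-gram extraction matches the unigram-to-trigram range described, while the "psycho-linguistic" markers here (pronoun and negation rates) are toy stand-ins for the much richer lexicon-based categories such studies typically use.

```python
from collections import Counter

def ngram_counts(text, n_max=3):
    """Count word n-grams from unigrams up to n_max-grams (trigrams here)."""
    tokens = text.lower().split()
    counts = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(tokens) - n + 1):
            counts[" ".join(tokens[i:i + n])] += 1
    return counts

def psycholinguistic_features(text):
    """Toy stand-ins for psycho-linguistic markers; the word lists
    below are illustrative, not the study's actual lexicons."""
    tokens = text.lower().split()
    pronouns = {"i", "we", "you", "he", "she", "they", "me", "us"}
    negations = {"not", "no", "never", "nothing"}
    total = max(len(tokens), 1)
    return {
        "word_count": len(tokens),
        "pronoun_rate": sum(t in pronouns for t in tokens) / total,
        "negation_rate": sum(t in negations for t in tokens) / total,
    }

def combined_features(text):
    """Roughly the 'combined approach': merge both feature sets."""
    feats = dict(ngram_counts(text))
    feats.update(psycholinguistic_features(text))
    return feats
```

A classifier trained on `combined_features` sees both surface lexical patterns and aggregate stylistic rates, which is the gist of why the combination can generalize better than n-grams alone.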

2.
Artif Life Robot ; : 1-11, 2023 Apr 28.
Article in English | MEDLINE | ID: covidwho-2319982

ABSTRACT

Given the ongoing COVID-19 pandemic, remote interviews have become an increasingly popular approach in many fields. For example, a survey by the HR Research Institute (PCR Institute in Survey on hiring activities for graduates of 2021 and 2022. https://www.hrpro.co.jp/research_detail.php?r_no=273. Accessed 03 Oct 2021) shows that more than 80% of job interviews are conducted remotely, particularly in large companies. However, an interviewee might attempt to deceive an interviewer or find it difficult to tell the truth. Although the ability of interviewers to detect deception among interviewees is important to their company or organization, it still depends strongly on individual experience and cannot be automated. To address this issue, we propose a machine learning approach that aids in detecting whether a person is attempting to deceive the interlocutor by associating features of their facial expressions with features of their pulse rate. We also constructed a more realistic dataset for the deception detection task by asking subjects not to respond artificially but to improvise natural responses, recorded with a web camera and a wearable device (smartwatch). An experimental evaluation of the proposed approach using a random forest classifier with 10-fold cross-validation shows that the accuracy and F1 score were in the range of 0.75 to 0.8 for each subject, with highest values of 0.87 and 0.88, respectively. By analyzing feature importance in the trained models, we identified the crucial features of each subject during deception, which differed among subjects.
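The 10-fold cross-validation protocol this abstract reports can be sketched generically. This is only the evaluation scaffolding, under the assumption of a per-subject feature matrix; the random forest itself (e.g., scikit-learn's `RandomForestClassifier`) would be plugged in via the `train_and_score` callback.

```python
import random

def kfold_indices(n_samples, k=10, seed=0):
    """Shuffle sample indices and split them into k disjoint folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(n_samples, train_and_score, k=10):
    """Hold out each fold once, train on the remaining folds,
    and collect the k held-out scores (e.g., accuracy or F1)."""
    folds = kfold_indices(n_samples, k)
    scores = []
    for held_out in folds:
        held = set(held_out)
        train = [i for i in range(n_samples) if i not in held]
        scores.append(train_and_score(train, held_out))
    return scores
```

Averaging the returned scores gives the per-subject figures reported in the abstract; running the whole procedure separately for each subject is consistent with the paper's finding that the important features differ among subjects.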

3.
International Journal of Human-Computer Studies ; 174:N.PAG-N.PAG, 2023.
Article in English | Academic Search Complete | ID: covidwho-2272296

ABSTRACT

• Observers fixated longer on the mouth and torso of speakers when those speakers were deceptive.
• Observers fixated longer on the hands of speakers when those speakers were honest.
• When assessing veracity, observers unexpectedly fixated on the mouth the most, compared to the eyes, torso, or hands of the speakers.
• Longer fixations on the mouth and torso of the speakers were associated with assessing the speakers as less credible.
• Longer gaze fixations on the torso and left hand of the speakers worsened deception detection accuracy.

Throughout the early part of this century, and especially during the peak of the global pandemic of 2020, the world has come to rely increasingly on computer-mediated communication (CMC). The study of computer-based media and their role in mediating communication has long been part of the academic study of information systems. Unfortunately, human communication, regardless of the medium over which it occurs, involves deception. Despite the growing reliance on CMC, little work has considered deception and its detection in mediated environments. The study reported here investigates the communication issues associated with cue restrictions in CMC, specifically videoconferencing, and how these restrictions affect deception detection success. We employed eye-tracking technology to analyze the visual behavior of veracity judges and how it influenced their assessments. We found that the judges' visual foci varied with message veracity: judges fixated longer on the mouth and torso of speakers when messages were deceptive and longer on the hands of speakers when messages were truthful. We also found that longer fixations on the mouth and torso of the speakers were associated with assessing the speakers as less credible. Finally, longer gaze fixations on the torso and left hand of the speakers resulted in less accurate deception detection performance.
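The fixation comparisons above rest on aggregating dwell time per area of interest (AOI). A minimal sketch of that aggregation step, assuming fixations arrive as (region, duration) pairs; the region labels are illustrative, not the study's actual AOI definitions:

```python
from collections import defaultdict

def dwell_time_by_region(fixations):
    """Sum fixation durations (ms) per area of interest (AOI).

    `fixations` is an iterable of (region, duration_ms) pairs, e.g.
    as exported from an eye tracker after AOI mapping.
    """
    totals = defaultdict(float)
    for region, duration in fixations:
        totals[region] += duration
    return dict(totals)
```

Per-region totals like these, computed separately for truthful and deceptive messages, are what allow comparisons such as "longer fixations on the mouth when messages were deceptive."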

4.
12th ACM Conference on Data and Application Security and Privacy, CODASPY 2022 ; : 349-351, 2022.
Article in English | Scopus | ID: covidwho-1874740

ABSTRACT

A recent survey claims that there are no general linguistic cues for deception. Since Internet societies are plagued with deceptive attacks such as phishing and fake news, this claim means that we must build individual datasets and detectors for each kind of attack. It also implies that when a new scam (e.g., Covid) arrives, we must start the whole process of data collection, annotation, and model building from scratch. In this paper, we put this claim to the test by building a quality domain-independent deception dataset and investigating whether a model can perform well on more than one form of deception. © 2022 Owner/Author.
